gibbs error
Active Learning for Probabilistic Hypotheses Using the Maximum Gibbs Error Criterion
Nguyen Viet Cuong, Wee Sun Lee, Nan Ye, Kian Ming A. Chai, Hai Leong Chieu

We introduce a new objective function for pool-based Bayesian active learning with probabilistic hypotheses. This objective function, called the policy Gibbs error, is the expected error rate of a random classifier drawn from the prior distribution on the examples adaptively selected by the active learning policy. Exact maximization of the policy Gibbs error is hard, so we propose a greedy strategy that maximizes the Gibbs error at each iteration, where the Gibbs error on an instance is the expected error of a random classifier selected from the posterior label distribution on that instance. We apply this maximum Gibbs error criterion to three active learning scenarios: non-adaptive, adaptive, and batch active learning. In each scenario, we prove that the criterion achieves near-maximal policy Gibbs error when constrained to a fixed budget. For practical implementations, we provide approximations to the maximum Gibbs error criterion for Bayesian conditional random fields and transductive Naive Bayes. Our experimental results on a named entity recognition task and a text classification task show that the maximum Gibbs error criterion is an effective active learning criterion for noisy models.
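In symbols, the per-instance Gibbs error described above can be written as follows (a sketch consistent with the abstract: the predicted label and the "true" label are both drawn from the posterior label distribution $p(\cdot \mid x)$, so the expected error has a closed form):

```latex
\mathrm{ge}(x)
\;=\; \mathbb{E}_{y,\,\hat{y} \,\sim\, p(\cdot \mid x)}
      \bigl[\mathbf{1}\{\hat{y} \neq y\}\bigr]
\;=\; \sum_{y} p(y \mid x)\bigl(1 - p(y \mid x)\bigr)
\;=\; 1 - \sum_{y} p(y \mid x)^{2}.
```

The final expression is, up to normalization, the Tsallis entropy of order 2 of the predictive distribution, which is the reduction the reviewer's overview refers to.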
Overview: The authors propose the Gibbs error criterion for active learning, seeking the samples that maximize the expected Gibbs error under the current posterior. They propose a greedy algorithm that maximizes this criterion (Max-GEC). The objective reduces to maximizing a specific instance of the Tsallis entropy of the predictive distribution, which is very similar to Maximum Entropy Sampling (MES), which uses the Shannon entropy of the predictive distribution. They consider the non-adaptive, adaptive, and batch settings separately, and in each setting they prove, using submodularity results, that the greedy approach achieves near-maximal performance compared to the optimal policy. They show how to implement their fully adaptive policy (approximately) in CRFs with application to named entity recognition, and implement the batch algorithm with a Naive Bayes classifier, with application to a text classification task.
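The greedy selection step the overview describes can be sketched in a few lines: score each unlabeled example by the Gibbs error of its posterior label distribution, 1 − Σ_y p(y)², and pick the argmax. This is a toy illustration under assumed interfaces (`predictive` and the pool are hypothetical stand-ins), not the paper's CRF or transductive Naive Bayes implementations:

```python
import numpy as np

def gibbs_error(p):
    # Gibbs error of a predictive distribution p over labels: the expected
    # error of a classifier that predicts by sampling a label from p,
    # scored against a true label also drawn from p.
    # Closed form: 1 - sum_y p(y)^2 (a Tsallis entropy of order 2,
    # up to normalization).
    p = np.asarray(p, dtype=float)
    return 1.0 - float(np.sum(p ** 2))

def max_gec_select(predictive, pool):
    # One greedy Max-GEC step: choose the unlabeled example whose
    # posterior label distribution has maximal Gibbs error.
    # `predictive(x)` is assumed to return the posterior label
    # distribution for example x under the current posterior.
    return max(pool, key=lambda x: gibbs_error(predictive(x)))

# Toy usage: example 1 has a maximally uncertain posterior, so it is picked.
dists = {0: [0.9, 0.1], 1: [0.5, 0.5]}
chosen = max_gec_select(lambda x: dists[x], pool=[0, 1])
```

Swapping `gibbs_error` for the Shannon entropy `-sum(p * log p)` recovers the MES baseline the overview compares against; the greedy loop itself is unchanged.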